
    Interactive Authentication

    Authentication is the first, crucial step in securing digital assets like cryptocurrencies and online services like banking and social networks. It relies on principals maintaining exclusive access to credentials like cryptographic signing keys, passwords, and physical devices. But both individuals and organizations struggle to manage their credentials, resulting in loss of assets and identity theft. Multi-factor authentication improves security, but its analysis and design are mostly limited to one-shot mechanisms, which decide immediately. In this work, we study mechanisms with back-and-forth interaction with the principals. For example, a user receives an email notification about sending money from her bank account and is given a period of time to abort the operation. We formally define the authentication problem, where an authentication mechanism interacts with a user and an attacker and tries to identify the user. A mechanism's success depends on the scenario: which credentials the user and the attacker know, where each credential can be safe, lost, leaked, or stolen. The profile of a mechanism is the set of all scenarios in which it succeeds. Thus, we have a partial order on mechanisms, defined by the subset relation on their profiles. We find an upper bound on the profile size and discover three types of n-credential mechanisms (for any n) that are maximally secure, meeting this bound. We show these are the only maximal mechanisms for n ≤ 3. We show the efficacy of our model by analyzing existing mechanisms, both theoretical and deployed in widely-used systems, and make concrete improvement proposals. We demonstrate the practicality of our mechanisms by implementing a maximally-secure cryptocurrency wallet.
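    To make the profile-based comparison concrete, here is a minimal sketch (not the paper's formal model) of computing a mechanism's profile and comparing two mechanisms by the subset relation. The credential states and the two toy mechanisms below are assumptions for illustration only.

```python
# A minimal sketch, assuming four per-credential states: "safe" (only the user
# knows it), "leaked" (both know it), "lost" (neither knows it), "stolen"
# (only the attacker knows it). A mechanism's profile is the set of scenarios
# in which it identifies the user; mechanisms are compared by set inclusion.
from itertools import product

STATES = ["safe", "lost", "leaked", "stolen"]

def user_knows(state):
    return state in ("safe", "leaked")

def attacker_knows(state):
    return state in ("leaked", "stolen")

def profile(succeeds, n_credentials):
    """All scenarios (tuples of per-credential states) where the mechanism wins."""
    return {s for s in product(STATES, repeat=n_credentials) if succeeds(s)}

# Hypothetical one-shot mechanism: accept whoever presents credential 0.
def single_factor(scenario):
    return user_knows(scenario[0]) and not attacker_knows(scenario[0])

# Hypothetical two-factor mechanism: both credentials must be presented, so the
# attacker must know both of them to impersonate the user.
def two_factor(scenario):
    user = all(user_knows(s) for s in scenario)
    attacker = all(attacker_knows(s) for s in scenario)
    return user and not attacker

p1 = profile(single_factor, 2)
p2 = profile(two_factor, 2)
# The subset relation is the partial order on mechanisms; these two toy
# mechanisms turn out to be incomparable (each succeeds in a scenario
# where the other fails), which prints False twice.
print(p1 <= p2, p2 <= p1)
```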

    Buying Time: Latency Racing vs. Bidding in Fair Transaction Ordering

    We design a practical algorithm for transaction ordering that takes into account both transaction timestamps and bids. The algorithm guarantees that users get their transactions published with bounded delay against a bid, while it extracts a fair value from sophisticated users that have an edge in latency, by moving expenditure from investment in latency-improvement technology to bidding. The algorithm creates a score from timestamps and bids and orders transactions by that score. We first show that a scoring rule is the only type of rule that satisfies independence of latency races. We then provide an economic analysis of the protocol in an environment of private information, where investment in latency is made at the ex-ante or interim stage, while bidding happens at the interim stage, after private signals have been observed. The algorithm is useful for transaction sequencing in rollups or in other environments where the sequencer has privileged access to order flows.
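    The sketch below illustrates the score-based ordering idea only; the discount function, its cap D, and the rate parameter are illustrative assumptions, not the paper's exact rule.

```python
# A minimal sketch of score-based ordering: each transaction's arrival
# timestamp is discounted by a function of its bid, capped at a hypothetical
# maximum discount D, and the sequencer orders by the resulting score. The cap
# is what bounds the delay a bidding user can suffer relative to its timestamp.
from dataclasses import dataclass

D = 0.5       # maximum time discount a bid can buy (assumed parameter, seconds)
RATE = 0.01   # discount per unit bid (assumed parameter)

@dataclass
class Tx:
    sender: str
    timestamp: float   # arrival time at the sequencer, in seconds
    bid: float         # amount the sender is willing to pay for priority

def score(tx: Tx) -> float:
    # Later arrivals can still be ordered earlier, but only by up to D seconds.
    return tx.timestamp - min(D, RATE * tx.bid)

def order(txs):
    return sorted(txs, key=score)

txs = [
    Tx("low-latency searcher", timestamp=10.00, bid=0.0),
    Tx("ordinary user",        timestamp=10.02, bid=100.0),  # buys back its latency gap
    Tx("late, small bid",      timestamp=10.60, bid=100.0),  # bid cannot close a 0.6 s gap
]
for tx in order(txs):
    print(tx.sender, round(score(tx), 3))
```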

    Truncator: Time-space Tradeoff of Cryptographic Primitives

    We present mining-based techniques to reduce the size of various cryptographic outputs without loss of security. Our approach can be generalized to multiple primitives, such as cryptographic key generation, signing, hashing, and encryption schemes, by introducing a brute-forcing step for provers/senders aiming to compress submitted cryptographic material. Interestingly, mining can result in record-size cryptographic outputs, and we show that 5%-12% shorter hash digests and signatures are practically feasible even with commodity hardware. As a result, our techniques make it possible to compress addresses and transaction signatures in order to pay lower fees in blockchain applications while decreasing the demand for blockchain space, a major bottleneck for initial syncing, communication, and storage. Also, the effects of "compress once, then reuse" at mass scale can be economically profitable in the long run for both the Web2 and Web3 ecosystems. Our paradigm relies on a brute-force search operation to craft the primitive's output so that it fits into fewer bytes, while the missing fixed bytes are implied by the system parameters and omitted from the actual communication. While such compression requires computational effort depending on the level of compression, this cost is only paid at the source (i.e., in blockchains, senders are rewarded by lowered transaction fees), and the benefits of the compression are enjoyed by the whole ecosystem. As a starting point, we show how our paradigm applies to some basic primitives commonly used in blockchain applications but also in traditional Web2 transactions (such as shorter digital certificates), and we show how security is preserved using a bit-security framework. Surprisingly, we also identified cases where wise mining strategies require proportionally less effort than naive brute-forcing, shorter hash-based signatures being one of the best examples. We also evaluate our approach for several primitives at different levels of compression. Our evaluation concretely demonstrates the benefits, both in terms of financial cost and storage, if adopted by the community, and we showcase how our technique can achieve up to an 83.21% reduction in smart contract gas fees at a cost of less than 4 seconds of computation on a single core.
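    A minimal sketch of the "mine to compress" idea for hash digests follows. The encoding and parameters are illustrative (in particular, carrying the nonce explicitly costs bytes; real schemes embed the brute-forced freedom in fields that already exist), not the paper's construction.

```python
# Brute-force a short nonce until the digest starts with d zero bytes, then
# transmit only the remaining bytes plus the nonce; verifiers re-hash and check
# the implied zero prefix, which is fixed by system parameters and omitted.
import hashlib
from itertools import count

def compress_digest(message: bytes, zero_bytes: int = 1):
    for nonce in count():
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if digest.startswith(b"\x00" * zero_bytes):
            # The zero prefix is implied and not transmitted.
            return nonce, digest[zero_bytes:]

def verify(message: bytes, nonce: int, truncated: bytes, zero_bytes: int = 1):
    digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return digest == b"\x00" * zero_bytes + truncated

nonce, short = compress_digest(b"example transaction")
print(len(short), verify(b"example transaction", nonce, short))   # 31 True
```

    One zero byte costs about 256 hash evaluations on average; each additional omitted byte multiplies the expected mining work by 256, which is the time-space tradeoff in the title.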

    Clockwork Finance: Automated Analysis of Economic Security in Smart Contracts

    We introduce the Clockwork Finance Framework (CFF), a general-purpose formal verification framework for mechanized reasoning about the economic security properties of composed decentralized-finance (DeFi) smart contracts. CFF features three key properties. It is contract complete, meaning that it can model any smart contract platform and all its contracts, Turing complete or otherwise. It does so with asymptotically constant model overhead. It is also attack-exhaustive by construction, meaning that it can automatically and mechanically extract all possible economic attacks on users' cryptocurrency across modeled contracts. Thanks to these properties, CFF can support multiple goals: economic security analysis of contracts by developers, analysis of DeFi trading risks by users, fee UX, and optimization of arbitrage opportunities by bots or miners. Because CFF offers composability, it can support these goals with reasoning over any desired set of potentially interacting smart contract models. We instantiate CFF as an executable model for Ethereum contracts that incorporates a state-of-the-art deductive verifier. Building on previous work, we introduce extractable value (EV), a new formal notion of economic security in composed DeFi contracts that is both a basis for CFF and of general interest. We construct modular, human-readable, composable CFF models of four popular, deployed DeFi protocols on Ethereum: Uniswap, Uniswap V2, Sushiswap, and MakerDAO, representing a combined 24 billion USD in value as of March 2022. We use these models, along with other common models such as flash loans, airdrops, and voting, to show experimentally that CFF is practical and can drive useful, data-based EV insights from real-world transaction activity. Without any explicitly programmed attack strategies, CFF uncovers on average an expected $56 million of EV per month in the recent past.
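    The toy below is only meant to convey the flavor of attack-exhaustive EV search over contract models; the pool reserves, the single-action strategy space, and the fee-free swap are illustrative assumptions, and the real CFF performs mechanized formal reasoning rather than numeric enumeration.

```python
# Enumerate every candidate action (here, arbitrage trade sizes between two toy
# constant-product pools with a price gap) and keep the most profitable one.

def swap(in_reserve, out_reserve, amount_in):
    """Constant-product swap with no fee: output amount for a given input."""
    return out_reserve - (in_reserve * out_reserve) / (in_reserve + amount_in)

def arb_profit(dx):
    dy = swap(1000, 1200, dx)        # buy Y cheaply on pool B with dx of X
    x_back = swap(1000, 1000, dy)    # sell that Y on pool A for X
    return x_back - dx               # EV of this single-block strategy

best = max(range(1, 200), key=arb_profit)
print(best, round(arb_profit(best), 3))   # most profitable trade size found
```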

    Secure Poisson Regression

    We introduce the first construction for secure two-party computation of Poisson regression, which enables two parties who hold shares of the input samples to learn only the resulting Poisson model while protecting the privacy of the inputs. Our construction relies on new protocols for secure fixed-point exponentiation and correlated matrix multiplication. Our secure exponentiation construction avoids expensive bit decomposition and achieves orders-of-magnitude improvements in both online and offline costs over state-of-the-art work. As a result, the dominant cost of our secure Poisson regression is matrix multiplication with one fixed matrix. We introduce a new technique, called correlated Beaver triples, which enables many such multiplications at the cost of roughly one matrix multiplication. This further brings down the cost of secure Poisson regression. We implement our constructions and show that they are extremely efficient. In a LAN setting, our secure exponentiation for 20-bit fractional precision takes less than 0.07 ms with a batch size of 100,000. One iteration of secure Poisson regression on a dataset with 10,000 samples and 1,000 binary features needs about 65.82 s in the offline phase, 55.14 s in the online phase, and 17 MB of total communication. For several real datasets this translates into training that takes seconds and only a couple of MB of communication.
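    The sketch below shows only the arithmetic behind the correlated-Beaver-triple idea: it works over floats instead of a fixed-point ring and omits the secret sharing (in the protocol each party computes its term on additive shares), so it is a toy illustration rather than the paper's protocol.

```python
# When one matrix X is fixed across many products X @ Y_j, a single mask A for
# X can be reused: only X - A is opened once, so the marginal cost per product
# approaches one plain matrix multiplication.
import numpy as np

rng = np.random.default_rng(0)

# Fixed secret matrix X and a batch of secret right-hand sides Y_j.
X = rng.normal(size=(3, 3))
Ys = [rng.normal(size=(3, 1)) for _ in range(4)]

# Preprocessing: one mask A for X, fresh masks B_j, and C_j = A @ B_j
# (in the protocol, A, B_j, C_j are secret-shared correlated triples).
A = rng.normal(size=X.shape)
Bs = [rng.normal(size=y.shape) for y in Ys]
Cs = [A @ b for b in Bs]

D = X - A                            # opened once, reused for every product
for Y, B, C in zip(Ys, Bs, Cs):
    E = Y - B                        # opened per product
    Z = D @ E + D @ B + A @ E + C    # Beaver identity: equals X @ Y
    assert np.allclose(Z, X @ Y)
print("all products reconstructed correctly")
```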

    Zef: Low-latency, Scalable, Private Payments

    We introduce Zef, the first Byzantine-Fault Tolerant (BFT) protocol to support payments in anonymous digital coins at arbitrary scale. Zef follows the communication and security model of FastPay: both protocols are asynchronous, low-latency, linearly scalable, and powered by partially-trusted sharded authorities. In contrast with FastPay, user accounts in Zef are uniquely identified and safely removable. Zef coins are bound to an account by a digital certificate and otherwise stored off-chain by their owners. To create and redeem coins, users interact with the protocol via privacy-preserving operations: Zef uses randomized commitments and NIZK proofs to hide coin values, and created coins are made unlinkable using the blind and randomizable threshold anonymous credentials of Coconut. Besides the detailed specifications and our analysis of the protocol, we are making available an open-source implementation of Zef in Rust. Our extensive benchmarks on AWS confirm textbook linear scalability and demonstrate a confirmation time under one second at nominal capacity. Compared to existing anonymous payment systems based on a blockchain, this represents a latency speedup of three orders of magnitude, with no theoretical limit on throughput.
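    As an illustration of the kind of primitive meant by "randomized commitments", here is a toy Pedersen-style commitment; the tiny group parameters are deliberately insecure and chosen only so the arithmetic is easy to follow, and none of Zef's actual groups, NIZK proofs, or Coconut credentials are shown.

```python
# Toy randomized commitment: commit(v, r) = g^v * h^r mod p in a small
# order-q subgroup. Different randomness hides the committed coin value, and
# the commitment is additively homomorphic in (value, randomness).
import secrets

p, q = 23, 11          # toy group: order-q subgroup of Z_p^*
g, h = 2, 3            # two subgroup generators with (assumed) unknown relation

def commit(value, randomness):
    return (pow(g, value, p) * pow(h, randomness, p)) % p

v, r1, r2 = 5, secrets.randbelow(q), secrets.randbelow(q)
print(commit(v, r1), commit(v, r2))     # same value, (usually) different commitments

# Additive homomorphism: commitments to coin values can be combined.
c = (commit(3, 4) * commit(2, 5)) % p
print(c == commit(3 + 2, 4 + 5))        # True
```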

    One-Message Secure Reductions: On the Cost of Converting Correlations

    Correlated secret randomness is a useful resource for secure computation protocols, often enabling dramatic speedups compared to protocols in the plain model. This has motivated a line of work on identifying and securely generating useful correlations. Different kinds of correlations can vary greatly in terms of usefulness and ease of generation. While there has been major progress on efficiently generating oblivious transfer (OT) correlations, other useful kinds of correlations are much more costly to generate. Thus, it is highly desirable to develop efficient techniques for securely converting copies of a given source correlation into copies of a given target correlation, especially when the former are cheaper to generate than the latter. In this work, we initiate a systematic study of such conversions that involve only a single unidirectional message. We refer to such a conversion as a one-message secure reduction (OMSR). Recent works (Agarwal et al., Eurocrypt 2022; Khorasgani et al., Eurocrypt 2022) studied a similar problem when no communication is allowed; this setting is quite restrictive, however, with few non-trivial conversions being feasible. The OMSR setting substantially expands the scope of feasible results, allowing for direct applications to existing MPC protocols. We obtain the following positive and negative results.
    - OMSR constructions: We present a general rejection-sampling-based technique for OMSR with OT source correlations and apply it to substantially improve the communication complexity of optimized protocols for distributed symmetric cryptography (Dinur et al., Crypto 2021).
    - OMSR lower bounds: We develop general techniques for proving lower bounds on the communication complexity of OMSR, matching our positive results up to small constant factors.
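    The toy below shows only the one-message rejection-sampling pattern: the source "correlation" is trivially shared common random bytes and the target is common samples uniform over 0..99, so no security question arises. The paper's constructions start from OT correlations and come with security proofs, none of which is captured here.

```python
# One party locally rejection-samples and sends the accepted indices as its
# single message; the other party keeps the same indices. Conditioned on the
# message, the kept samples follow the target distribution exactly.
import os

source = list(os.urandom(64))      # both parties are assumed to hold these bytes

# Alice: accept bytes in [0, 200) so that value % 100 is exactly uniform.
accepted = [i for i, b in enumerate(source) if b < 200]   # the one message, Alice -> Bob

# Both parties derive the target samples from the kept source values.
alice_out = [source[i] % 100 for i in accepted]
bob_out   = [source[i] % 100 for i in accepted]
print(alice_out == bob_out, len(alice_out), "samples kept out of", len(source))
```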

    Themis: Fast, Strong Order-Fairness in Byzantine Consensus

    We introduce Themis, a scheme for introducing fair ordering of transactions into (permissioned) Byzantine consensus protocols with at most f faulty nodes among n ≥ 4f + 1. Themis enforces the strongest notion of fair ordering proposed to date. It also achieves standard liveness, rather than the weaker notion of previous work with the same fair-ordering property. We show experimentally that Themis can be integrated into state-of-the-art consensus protocols with minimal modification or performance overhead. Additionally, we introduce a suite of experiments of general interest for evaluating the practical strength of various notions of fair ordering and the resilience of fair-ordering protocols to adversarial manipulation. We use this suite of experiments to show that the notion of fair ordering enforced by Themis is significantly stronger in practical settings than those of competing systems. We believe Themis offers strong practical protection against many types of transaction-ordering attacks, such as front-running and back-running, that are currently impacting commonly used smart contract systems.
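    The sketch below is a drastically simplified, Copeland-style aggregation of node-local receive orders, shown only to convey what "ordering by aggregated local views" means. Themis itself builds a dependency graph, handles Condorcet cycles, and tolerates up to f Byzantine reports with n ≥ 4f + 1; none of that machinery appears here, and the local orders are assumed honest.

```python
# Rank transactions by how many pairwise "received earlier by a majority of
# nodes" wins they collect across the nodes' reported local orders.
from itertools import combinations

local_orders = [                 # one received-order per node (assumed honest)
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["b", "a", "c", "d"],
    ["a", "b", "d", "c"],
    ["a", "b", "c", "d"],
]
txs = sorted(set(local_orders[0]))

def before_count(x, y):
    """Number of nodes that received x before y."""
    return sum(order.index(x) < order.index(y) for order in local_orders)

wins = {t: 0 for t in txs}
for x, y in combinations(txs, 2):
    if before_count(x, y) > len(local_orders) / 2:
        wins[x] += 1
    else:
        wins[y] += 1

final = sorted(txs, key=lambda t: -wins[t])
print(final)                     # ['a', 'b', 'c', 'd'] for the orders above
```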

    Complete Knowledge: Preventing Encumbrance of Cryptographic Secrets

    Most cryptographic protocols model a player’s knowledge of secrets in a simple way. Informally, the player knows a secret in the sense that she can directly furnish it as a (private) input to a protocol, e.g., to digitally sign a message. The growing availability of Trusted Execution Environments (TEEs) and secure multiparty computation, however, undermines this model of knowledge. Such tools can encumber a secret sk and permit a chosen player to access sk conditionally, without actually knowing sk. By permitting selective access to sk by an adversary, encumbrance of secrets can enable vote-selling in cryptographic voting schemes, illegal sale of credentials for online services, and erosion of deniability in anonymous messaging systems. Unfortunately, existing proof-of-knowledge protocols fail to demonstrate that a secret is unencumbered. We therefore introduce and formalize a new notion called complete knowledge (CK). A proof (or argument) of CK shows that a prover does not just know a secret, but also has fully unencumbered knowledge, i.e., unrestricted ability to use the secret. We introduce two practical CK schemes that use special-purpose hardware, specifically TEEs and off-the-shelf mining ASICs. We prove the security of these schemes and explore their practical deployment with a complete, end-to-end prototype that supports both. We show how CK can address encumbrance attacks identified in previous work. Finally, we introduce two new applications enabled by CK that involve proving ownership of blockchain assets.
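    As a bare-bones illustration of the ASIC-flavored approach, the sketch below shows only the "work over the secret" step: the prover repeatedly hashes sk with a nonce until the digest clears a difficulty threshold, demonstrating unrestricted use of sk as a raw input. The real schemes bind this work to a public commitment of sk via TEE attestation or a ZK proof so it can be verified without revealing sk; that binding and the difficulty parameter here are assumptions of the sketch.

```python
# Prover-side proof-of-work computed directly over the secret key.
import hashlib
from itertools import count

DIFFICULTY_ZERO_BITS = 16            # illustrative difficulty, ~2^16 hashes

def ck_work(sk: bytes):
    target = 1 << (256 - DIFFICULTY_ZERO_BITS)
    for nonce in count():
        digest = hashlib.sha256(sk + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest     # evidence of many hash evaluations over sk

nonce, digest = ck_work(b"example secret key")
print(nonce, digest.hex()[:16])
```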

    STROBE: Stake-based Threshold Random Beacons

    We revisit decentralized random beacons with a focus on practical distributed applications. Decentralized random beacons (Beaver and So, Eurocrypt 1993) provide the functionality for n parties to generate an unpredictable sequence of bits in a way that cannot be biased, which is useful for any decentralized protocol requiring trusted randomness. Existing beacon constructions are highly inefficient in practical settings where protocol parties need to rejoin after crashes or disconnections, and more significantly where smart contracts may rely on arbitrary index points in high-volume streams. For this, we introduce a new notion of history-generating decentralized random beacons (HGDRBs). Roughly, the history-generation property of HGDRBs allows previous beacon outputs to be efficiently generated knowing only the current value and the public key. At application layers, history-generation supports registering a sparser set of on-chain values if desired, so that apps like lotteries can utilize on-chain values without incurring high-frequency costs, enjoying all the benefits of DRBs implemented off-chain or with decoupled, special-purpose chains. Unlike rollups, HG is tailored specifically to recovering and verifying pseudorandom bit sequences and thus enjoys unique optimizations investigated in this work. We introduce STROBE: an efficient HGDRB construction which generalizes the original squaring-based RSA approach of Beaver and So. STROBE enjoys several useful properties that make it suited for practical applications that use beacons:
    - history-generating: it can regenerate and verify high-throughput beacon streams, supporting sparse (thus cost-effective) ledger entries;
    - concisely self-verifying: NIZK-free, with state and validation employing a single ring element;
    - eco-friendly: stake-based rather than work-based;
    - unbounded: refresh-free, addressing limitations of Beaver and So;
    - delay-free: results are immediately available.
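    To make history generation concrete, here is a toy numeric sketch of the squaring-chain idea that STROBE generalizes, under the assumption that the chain is precomputed and released in reverse order; the primes are tiny and insecure, and STROBE's stake-based setup and verification are not shown.

```python
# The operator precomputes a squaring chain s, s^2, s^4, ... mod N and releases
# it in reverse, so anyone holding the current output and the public modulus N
# can regenerate every earlier output by repeated squaring, with no trapdoor.
p, q = 1009, 1013                 # toy primes; a real deployment uses a proper RSA modulus
N = p * q
T = 8                             # number of beacon epochs precomputed
seed = 123456 % N

# Operator: precompute the chain and release it in REVERSE order.
chain = [seed]
for _ in range(T):
    chain.append(pow(chain[-1], 2, N))
beacon = list(reversed(chain))    # beacon[0] is released first

# History generation: from the latest released output and N alone, regenerate
# all earlier outputs by squaring.
current = beacon[5]
for k in range(1, 6):
    assert pow(current, 1 << k, N) == beacon[5 - k]
print("earlier outputs regenerated from the current value")
```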